Distribution shift (DS) is a common problem that deteriorates the performance of learning machines. To overcome this problem, we postulate that real-world distributions are composed of elementary distributions that remain invariant across different domains. We call this the invariant elementary distribution (I.E.D.) assumption. This invariance thus enables knowledge transfer to unseen domains. To exploit this assumption in domain generalization (DG), we developed a modular neural network layer consisting of Gated Domain Units (GDUs). Each GDU learns an embedding of an individual elementary domain, which allows us to encode domain similarities during training. During inference, the GDUs compute similarities between an observation and each of the corresponding elementary distributions, which are then used to form a weighted ensemble of learning machines. Because our layer is trained with backpropagation, it can easily be integrated into existing deep learning frameworks. Our evaluation on Digits5, ECG, Camelyon17, iWildCam, and FMoW shows a significant improvement in performance on out-of-training target domains, without access to data from the target domain. This finding supports the I.E.D. assumption for real-world data distributions.
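As a rough illustration of the mechanism, here is a minimal PyTorch sketch of a GDU-style layer: M learnable domain bases, an RBF-style similarity between each observation's features and every basis, and a similarity-weighted ensemble of per-domain heads. The kernel choice, head design, and all names are assumptions for illustration, not the paper's exact formulation.

```python
# Minimal sketch of a GDU-style layer (hypothetical implementation).
import torch
import torch.nn as nn

class GatedDomainUnits(nn.Module):
    """Weights an ensemble of per-domain heads by the similarity between
    an input's feature embedding and M learned elementary-domain bases."""

    def __init__(self, feat_dim: int, num_domains: int, num_classes: int, sigma: float = 1.0):
        super().__init__()
        # One learnable basis vector per elementary domain.
        self.domain_bases = nn.Parameter(torch.randn(num_domains, feat_dim))
        # One lightweight learning machine (head) per elementary domain.
        self.heads = nn.ModuleList(
            nn.Linear(feat_dim, num_classes) for _ in range(num_domains)
        )
        self.sigma = sigma

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # h: (batch, feat_dim) features from a shared backbone.
        # RBF-style similarity between each sample and each domain basis.
        dists = torch.cdist(h, self.domain_bases)             # (batch, M)
        weights = torch.softmax(-dists / self.sigma, dim=1)   # (batch, M)
        # Similarity-weighted ensemble of the per-domain predictions.
        preds = torch.stack([head(h) for head in self.heads], dim=1)  # (batch, M, C)
        return (weights.unsqueeze(-1) * preds).sum(dim=1)     # (batch, C)
```

Because the layer is an ordinary `nn.Module` trained by backpropagation, it can sit on top of any existing backbone.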
Determining and predicting reservoir formation properties for newly drilled wells represents a significant challenge. One way to evaluate these properties is through well-interval similarity. Many methodologies for similarity learning exist, from rule-based approaches to deep neural networks. Since we deal with sequential data, recent articles have adopted, e.g., recurrent neural networks to build similarity models. Such an approach suffers from short-term memory, as it pays more attention to the end of a sequence. Neural networks with the Transformer architecture instead cast their attention over the whole sequence to make a decision. To make them more efficient in terms of computational time, we introduce a limited attention mechanism similar to those in the Informer and Performer architectures. We conduct experiments on open datasets with more than 20 wells, making our experiments reliable and suitable for industrial usage. The best results were obtained with our adaptation of the Informer variant of the Transformer, with a ROC AUC of 0.982. It outperforms classical approaches (ROC AUC 0.824), recurrent neural networks (ROC AUC 0.934), and straightforward usage of Transformers (ROC AUC 0.961).
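To make the setup concrete, below is a hedged sketch of a Siamese Transformer encoder for well-interval similarity, using a local-window attention mask as a stand-in for "limited" attention. Informer's ProbSparse attention and Performer's kernelized attention are different mechanisms; the windowed mask only illustrates the idea of restricting attention for efficiency. All dimensions and names are illustrative.

```python
# Illustrative Siamese encoder with windowed ("limited") attention for
# comparing two well intervals represented as sequences of log readings.
import torch
import torch.nn as nn

def local_attention_mask(seq_len: int, window: int) -> torch.Tensor:
    # True marks positions that must NOT be attended to.
    idx = torch.arange(seq_len)
    return (idx[None, :] - idx[:, None]).abs() > window

class IntervalEncoder(nn.Module):
    def __init__(self, n_logs: int = 7, d_model: int = 64, window: int = 8):
        super().__init__()
        self.proj = nn.Linear(n_logs, d_model)   # n_logs = channels per depth step
        layer = nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=2)
        self.window = window

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, depth_steps, n_logs) well-log measurements.
        mask = local_attention_mask(x.size(1), self.window).to(x.device)
        z = self.encoder(self.proj(x), mask=mask)
        return z.mean(dim=1)                      # pooled interval embedding

encoder = IntervalEncoder()
a, b = torch.randn(2, 100, 7), torch.randn(2, 100, 7)
similarity = nn.functional.cosine_similarity(encoder(a), encoder(b))
```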
This article presents a dataset of 10,917 news articles with hierarchical news categories collected between January 1st, 2019, and December 31st, 2019. We manually labelled the articles based on a hierarchical taxonomy with 17 first-level and 109 second-level categories. The dataset can be used to train machine learning models that automatically classify news articles by topic, and can be helpful for researchers working on news structuring, classification, and predicting future events based on released news.
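A hypothetical usage sketch follows: training a baseline first-level topic classifier on such a dataset with scikit-learn. The file and column names ("news_dataset.csv", "text", "category_level_1") are assumptions for illustration, not the dataset's documented schema.

```python
# Hypothetical baseline: TF-IDF features + logistic regression over the
# 17 first-level categories. Column/file names are assumed, not documented.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

df = pd.read_csv("news_dataset.csv")             # hypothetical file name
model = make_pipeline(TfidfVectorizer(max_features=50_000),
                      LogisticRegression(max_iter=1000))
model.fit(df["text"], df["category_level_1"])    # 17 first-level categories
```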
Generic Object Tracking (GOT) is the problem of tracking target objects specified by bounding boxes in the first frame of a video. While the task has received much attention in recent decades, researchers have almost exclusively focused on the single-object setting. Multi-object GOT benefits from a wider applicability, rendering it more attractive in real-world applications. We attribute the lack of research interest in this problem to the absence of suitable benchmarks. In this work, we introduce a new large-scale GOT benchmark, LaGOT, containing multiple annotated target objects per sequence. Our benchmark allows researchers to tackle key remaining challenges in GOT, aiming to increase robustness and reduce computation through joint tracking of multiple objects simultaneously. Furthermore, we propose a Transformer-based GOT tracker, TaMOs, capable of joint processing of multiple objects through shared computation. TaMOs achieves a 4x faster run-time with 10 concurrent objects compared to tracking each object independently, and outperforms existing single-object trackers on our new benchmark. Finally, TaMOs achieves highly competitive results on single-object GOT datasets, setting a new state of the art on TrackingNet with a success-rate AUC of 84.4%. Our benchmark, code, and trained models will be made publicly available.
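The efficiency claim rests on amortizing shared computation across targets. The schematic sketch below shows that idea only: an expensive backbone runs once per frame, while cheap per-target heads scale with the number of objects. It is not TaMOs' actual architecture.

```python
# Schematic: why joint multi-object tracking amortizes computation.
import torch
import torch.nn as nn

class JointTracker(nn.Module):
    def __init__(self, feat_dim: int = 256):
        super().__init__()
        self.backbone = nn.Sequential(             # shared across all targets
            nn.Conv2d(3, feat_dim, 7, stride=4, padding=3), nn.ReLU(),
        )
        self.box_head = nn.Conv2d(feat_dim, 4, 1)  # cheap per-target head

    def forward(self, frame: torch.Tensor, target_embs: torch.Tensor):
        # frame: (1, 3, H, W); target_embs: (num_targets, feat_dim)
        feats = self.backbone(frame)                # computed ONCE per frame
        boxes = []
        for emb in target_embs:
            # Modulate the shared features with a target-specific embedding.
            attended = feats * emb.view(1, -1, 1, 1)
            boxes.append(self.box_head(attended).mean(dim=(2, 3)))
        return torch.cat(boxes)                     # (num_targets, 4)
```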
Legal Prompt Engineering (LPE), or Legal Prompting, is a process to guide and assist a large language model (LLM) in performing a natural legal language processing (NLLP) skill. Our goal is to use LPE with LLMs over long legal documents for the Legal Judgement Prediction (LJP) task. We investigate the performance of zero-shot LPE for given facts in case texts from the European Court of Human Rights (in English) and the Federal Supreme Court of Switzerland (in German, French, and Italian). Our results show that zero-shot LPE performs better than the baselines, but it still falls short of current state-of-the-art supervised approaches. Nevertheless, the results are important, since 1) no explicit domain-specific data was used, showing that the transfer to the legal domain is possible for general-purpose LLMs, and 2) the LLMs were applied directly without any further training or fine-tuning, which in turn saves immensely in terms of additional computational costs.
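To illustrate the general shape of zero-shot LPE, here is a hypothetical prompt template for judgement prediction from case facts. The wording, the truncation strategy, and the binary "violation" framing are assumptions for illustration; the paper's actual prompts and models may differ.

```python
# Illustrative zero-shot legal prompt for judgement prediction.
PROMPT_TEMPLATE = """You are a legal expert. Based on the following facts
from a court case, predict the outcome.

Facts:
{facts}

Question: Did the court find a violation? Answer "violation" or
"no violation" and briefly justify your answer.
Answer:"""

def build_ljp_prompt(case_facts: str, max_chars: int = 12_000) -> str:
    # Long legal documents may exceed the model's context window, so we
    # truncate the facts section (a naive strategy, for illustration only).
    return PROMPT_TEMPLATE.format(facts=case_facts[:max_chars])
```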
Multilingual Neural Machine Translation (MNMT) models leverage many language pairs during training to improve translation quality for low-resource languages by transferring knowledge from high-resource languages. We study the quality of a domain-adapted MNMT model in the medical domain for English-Romanian, using automatic metrics and a human error typology annotation that includes terminology-specific error categories. We compare the out-of-domain MNMT model with the in-domain adapted MNMT model. The in-domain MNMT model outperforms the out-of-domain MNMT model in all measured automatic metrics and produces fewer terminology errors.
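As a minimal example of the kind of automatic evaluation involved, corpus-level BLEU can be computed with the sacreBLEU library as below. The sentences are made up, and the paper's actual metric suite is not reproduced here.

```python
# Corpus BLEU with sacreBLEU; hyps holds system outputs, refs holds one
# reference stream (one reference string per hypothesis).
import sacrebleu

hyps = ["The patient received antibiotic treatment."]
refs = [["The patient was given antibiotic treatment."]]
bleu = sacrebleu.corpus_bleu(hyps, refs)
print(f"BLEU = {bleu.score:.1f}")
```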
IMPORTANCE: An interpretable machine learning model can provide faithful explanations of each prediction and yet maintain higher performance than its black-box counterpart. OBJECTIVE: To design an interpretable machine learning model that accurately predicts EEG protopatterns while providing an explanation of its predictions with the assistance of a specialized GUI, and to map the cEEG latent features to a 2D space in order to visualize the ictal-interictal-injury continuum and gain insight into its high-dimensional structure. DESIGN, SETTING, AND PARTICIPANTS: 50,697 50-second cEEG samples from 2,711 ICU patients collected between July 2006 and March 2020 at Massachusetts General Hospital. Samples were labeled as one of 6 EEG activities by domain experts, with 124 different experts providing annotations. MAIN OUTCOMES AND MEASURES: Our neural network is interpretable because it uses case-based reasoning: it compares a new EEG reading to a set of learned prototypical EEG samples from the training dataset. Interpretability was measured with task-specific neighborhood agreement statistics. Discriminatory performance was evaluated with AUROC and AUPRC. RESULTS: The model achieves AUROCs of 0.87, 0.93, 0.96, 0.92, 0.93, and 0.80 for the classes Seizure, LPD, GPD, LRDA, GRDA, and Other, respectively. This performance is statistically significantly higher than that of the corresponding uninterpretable (black-box) model (p<0.0001). Videos of the ictal-interictal-injury continuum are provided. CONCLUSION AND RELEVANCE: Our interpretable model and GUI can act as a reference for practitioners who work with cEEG patterns. We can now better understand the relationships between different types of cEEG patterns. In the future, this system may allow for targeted intervention and training in clinical settings. It could also be used to confirm or provide additional information for diagnostics.
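A minimal sketch of the case-based-reasoning idea follows, in the spirit of prototype-based networks (ProtoPNet-style): an encoder maps a cEEG sample to a latent vector, similarities to learned prototypes are computed, and the class scores are a linear function of those similarities. The exact similarity function and training procedure in the paper may differ.

```python
# Prototype-based classifier sketch; returning the similarities is what
# lets a GUI display the most similar prototypical samples as explanations.
import torch
import torch.nn as nn

class PrototypeClassifier(nn.Module):
    def __init__(self, encoder: nn.Module, latent_dim: int,
                 num_prototypes: int, num_classes: int):
        super().__init__()
        self.encoder = encoder
        # Prototypes live in latent space; after training, each can be
        # pushed to the nearest real training sample so it is showable.
        self.prototypes = nn.Parameter(torch.randn(num_prototypes, latent_dim))
        self.classify = nn.Linear(num_prototypes, num_classes)

    def forward(self, x: torch.Tensor):
        z = self.encoder(x)                              # (batch, latent_dim)
        dists = torch.cdist(z, self.prototypes)          # (batch, P)
        sims = torch.log((dists + 1) / (dists + 1e-4))   # high when dist small
        logits = self.classify(sims)
        return logits, sims
```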
Automatic subtitling is the task of automatically translating the speech of an audiovisual product into short pieces of timed text, that is, subtitles and their corresponding timestamps. The generated subtitles need to conform to multiple space and time requirements (length, reading speed) while being synchronized with the speech and segmented in a way that facilitates comprehension. Given its considerable complexity, automatic subtitling has so far been addressed by separately handling its elements: transcription, translation, segmentation into subtitles, and timestamp prediction. In this paper, we propose the first direct automatic subtitling model, which generates target-language subtitles and their timestamps from the source speech in a single solution. Comparisons with state-of-the-art cascaded models trained on both in-domain and out-of-domain data show that our system provides high-quality subtitles while also being competitive in terms of conformity, with all the advantages of maintaining a single model.
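For context on what the model's two outputs amount to, here is a small sketch that assembles subtitle texts and timestamps into SRT, a common subtitle file format. This is formatting boilerplate only; nothing here is specific to the paper's model.

```python
# Assemble (start_seconds, end_seconds, text) triples into SRT format.
def to_srt(subtitles: list[tuple[float, float, str]]) -> str:
    def ts(seconds: float) -> str:
        h, rem = divmod(seconds, 3600)
        m, s = divmod(rem, 60)
        return f"{int(h):02d}:{int(m):02d}:{int(s):02d},{int((s % 1) * 1000):03d}"
    blocks = [
        f"{i}\n{ts(start)} --> {ts(end)}\n{text}\n"
        for i, (start, end, text) in enumerate(subtitles, start=1)
    ]
    return "\n".join(blocks)

print(to_srt([(0.0, 2.5, "Hello and welcome."), (2.7, 5.0, "Let's get started.")]))
```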
Speech translation for subtitling (SubST) is the task of automatically translating speech data into well-formed subtitles by inserting, in addition to the translation, subtitle breaks that comply with specific displaying guidelines. Similar to speech translation (ST), model training requires parallel data comprising audio inputs paired with their textual translations. In SubST, however, the text must also be annotated with subtitle breaks. So far, this requirement has represented a bottleneck for system development, as confirmed by the dearth of publicly available SubST corpora. To fill this gap, we propose a method to convert existing ST corpora into SubST resources without human intervention. We build a segmenter model that automatically segments texts into proper subtitles by exploiting audio and text in a multimodal fashion, achieving high segmentation quality under zero-shot conditions. Comparative experiments with SubST systems trained on manual and automatic segmentations yield similar performance, showing the effectiveness of our approach.
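To make the annotation concrete: in the subtitle-segmentation literature, breaks are commonly encoded as special tokens in the text (e.g., <eob> for end of subtitle block and <eol> for end of line, as in MuST-Cinema). A segmenter produces this annotated form from a plain translation; the sketch below only shows decoding the annotated text back into displayable subtitles, and assumes that token convention.

```python
# Decode <eob>/<eol> break tokens into a list of subtitle blocks.
def breaks_to_subtitles(annotated: str) -> list[str]:
    blocks = annotated.split("<eob>")
    return [
        "\n".join(line.strip() for line in b.split("<eol>")).strip()
        for b in blocks if b.strip()
    ]

text = "Welcome back. <eol> Today we talk about sonar. <eob> Let's begin. <eob>"
for i, sub in enumerate(breaks_to_subtitles(text), start=1):
    print(f"subtitle {i}: {sub!r}")
```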
Synthetic aperture sonar (SAS) imagery is crucial for multiple applications, including target recognition and environmental segmentation. Deep learning models have achieved much success in SAS analysis. However, the features extracted by these approaches may not be suitable for capturing certain textural information. To address this problem, we present a novel application of histogram layers to SAS imagery. Adding histogram layers to deep learning models improves performance by incorporating statistical texture information, on both synthetic and real-world datasets.
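For intuition, here is a minimal sketch of a differentiable histogram layer: soft bin membership via an RBF kernel with learnable bin centers and widths, averaged into a normalized soft histogram. This follows the general histogram-layer literature; the paper's exact formulation may differ.

```python
# Differentiable histogram layer: summarizes the distribution of feature
# values as a soft histogram that gradients can flow through.
import torch
import torch.nn as nn

class HistogramLayer(nn.Module):
    def __init__(self, num_bins: int = 16):
        super().__init__()
        self.centers = nn.Parameter(torch.linspace(-1, 1, num_bins))
        self.widths = nn.Parameter(torch.ones(num_bins))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, features) values whose distribution we want to capture.
        # Soft membership of each value in each bin (RBF kernel), then
        # average over the feature axis -> a normalized soft histogram.
        diff = x.unsqueeze(-1) - self.centers           # (batch, features, bins)
        membership = torch.exp(-(self.widths * diff) ** 2)
        return membership.mean(dim=1)                   # (batch, bins)
```

Because the output is a distribution summary rather than a spatial feature map, it can be concatenated with conventional CNN features to supply the statistical texture information the abstract describes.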